# NF4 Quantization

## Molmo 7B D 0924 NF4
License: Apache-2.0
A 4-bit quantized version of Molmo-7B-D-0924 that reduces VRAM usage via the NF4 quantization strategy, making it suitable for environments with limited VRAM.
Tags: Image-to-Text, Transformers
Author: Scoolar · Downloads: 1,259
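The entry above describes cutting VRAM by loading the model in NF4. Below is a minimal sketch of how such a checkpoint is typically loaded with `transformers` and `bitsandbytes`; the repo id `Scoolar/Molmo-7B-D-0924-NF4` and the use of `AutoProcessor` are assumptions inferred from the listing, not confirmed by it.

```python
# Hedged sketch: load a Molmo-style checkpoint with an NF4 bitsandbytes config.
# Repo id below is an assumption based on the catalog entry.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits at load time
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute in bf16 for stability
)

model = AutoModelForCausalLM.from_pretrained(
    "Scoolar/Molmo-7B-D-0924-NF4",          # assumed repo id
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,                 # Molmo ships custom model code
)
processor = AutoProcessor.from_pretrained(
    "Scoolar/Molmo-7B-D-0924-NF4",
    trust_remote_code=True,
)
```

NF4 stores the weights as 4-bit NormalFloat values and dequantizes them on the fly, which is why a 7B model that needs roughly 14 GB in bf16 can fit in a single consumer GPU's VRAM.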
## Llama 2 7b Hf 4bit 64rank
License: MIT
A LoftQ (LoRA Fine-tuning Aware Quantization) model that provides a quantized backbone network together with LoRA adapters. It is designed specifically for LoRA fine-tuning, improving the fine-tuning performance and efficiency of quantized large language models. A loading sketch follows below.
Tags: Large Language Model, Transformers, English
Author: LoftQ · Downloads: 1,754
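The LoftQ entry pairs an NF4-quantized backbone with LoRA adapters whose initialization is aware of the quantization error. The sketch below follows the common PEFT loading pattern; the repo id `LoftQ/Llama-2-7b-hf-4bit-64rank` and the adapter subfolder name `loftq_init` are assumptions taken from typical LoftQ examples and may differ for this checkpoint.

```python
# Hedged sketch: load the quantized backbone, then attach the LoftQ-initialized
# LoRA adapters for fine-tuning. Repo id and subfolder are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/Llama-2-7b-hf-4bit-64rank"  # assumed repo id

base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",              # same NF4 scheme as above
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=False,
    ),
)

# Attach the rank-64 LoRA adapters; is_trainable=True keeps them updatable
# so the quantized backbone stays frozen while the adapters are fine-tuned.
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="loftq_init",  # assumed adapter subfolder
    is_trainable=True,
)
```

The design point of LoftQ is that the adapters are initialized to compensate for the quantization error of the backbone, so fine-tuning starts closer to the full-precision model than with a plain QLoRA-style zero initialization.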